3 research outputs found

    Proximity measures based on KKT points for constrained multi-objective optimization

    Termination criteria that measure the proximity of a found solution to the optimal solution set are an important aspect of optimization algorithms, for instance evolutionary algorithms. A frequently used approach is the numerical verification of necessary optimality conditions such as the Karush-Kuhn-Tucker (KKT) conditions. In this paper, we present a proximity measure which characterizes the violation of the KKT conditions. It can be computed easily and is continuous in every efficient solution. Hence, it can be used as an indicator for the proximity of a given point to the set of efficient (Edgeworth-Pareto-minimal) solutions, and its continuity properties make it well suited for algorithmic use. This is especially useful within evolutionary algorithms for candidate selection and termination, which we also illustrate numerically for some test problems.
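    The sketch below illustrates the general idea of quantifying KKT violation at a candidate point: the stationarity and complementarity residuals are minimized over the admissible multipliers, so the value is zero exactly at KKT points. It is a generic formulation with illustrative function and variable names, not the specific measure proposed in the paper.

    import numpy as np
    from scipy.optimize import minimize

    def kkt_violation(x, grad_f, grad_g, g):
        # Gradients of the objectives and of the constraints g_j(x) <= 0 at x.
        F = np.array([df(x) for df in grad_f])      # shape (p, n), objective gradients
        G = np.array([dg(x) for dg in grad_g])      # shape (m, n), constraint gradients
        gx = np.array([gj(x) for gj in g])          # shape (m,), constraint values
        p, m = len(grad_f), len(grad_g)

        def residual(z):
            lam, mu = z[:p], z[p:]
            stat = lam @ F + mu @ G                 # stationarity residual
            comp = mu * gx                          # complementarity residual
            return stat @ stat + comp @ comp

        z0 = np.concatenate([np.full(p, 1.0 / p), np.zeros(m)])
        cons = ({'type': 'eq', 'fun': lambda z: z[:p].sum() - 1.0},)
        res = minimize(residual, z0, method='SLSQP',
                       bounds=[(0.0, None)] * (p + m), constraints=cons)
        return res.fun                              # zero exactly at KKT points

    # Hypothetical bi-objective example: f1(x) = ||x||^2, f2(x) = ||x - (1, 0)||^2, x2 >= 0.
    grad_f = [lambda x: 2 * x, lambda x: 2 * (x - np.array([1.0, 0.0]))]
    g, grad_g = [lambda x: -x[1]], [lambda x: np.array([0.0, -1.0])]
    print(kkt_violation(np.array([0.5, 0.0]), grad_f, grad_g, g))   # approximately 0 on the efficient set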

    An approximation algorithm for multi-objective optimization problems using a box-coverage

    For a continuous multi-objective optimization problem, it is usually not practical to compute all of its nondominated points, because there are infinitely many of them. For this reason, a typical approach is to compute an approximation of the nondominated set. A common technique is to generate a polyhedron which contains the nondominated set. However, these approximations are often used for further evaluations, and for such applications a polyhedron is a structure that is not easy to handle. In this paper, we introduce an approximation with a simpler structure that respects the natural ordering. In particular, we compute a box-coverage of the nondominated set. To do so, we use an approach that, in general, allows us to update not only one but several boxes whenever a new nondominated point is found. The algorithm is guaranteed to stop with a finite number of boxes, each of which is sufficiently thin.
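    A small sketch of the basic box-update step behind such coverages: when a new nondominated point is found inside a box, the box is replaced by sub-boxes that exclude the regions dominated by, and dominating, that point. This is a simple, possibly overlapping split for minimization problems, meant only to illustrate the idea; it is not the update rule of the paper.

    import itertools
    import numpy as np

    def split_box(l, u, y):
        # Replace the box [l, u] by sub-boxes that can still contain nondominated
        # points other than y.  For each ordered pair (k, j) of objectives, keep the
        # part with z_k <= y_k (not dominated by y in component k) and z_j >= y_j
        # (not dominating y in component j).
        boxes = []
        for k, j in itertools.permutations(range(len(l)), 2):
            lo, up = np.array(l, float), np.array(u, float)
            up[k], lo[j] = y[k], y[j]
            if np.all(lo <= up):                    # drop empty boxes
                boxes.append((lo, up))
        return boxes

    # Bi-objective example: the point (0.4, 0.7) found in the unit box splits it into
    # the boxes [0, 0.4] x [0.7, 1] and [0.4, 1] x [0, 0.7].
    print(split_box([0.0, 0.0], [1.0, 1.0], [0.4, 0.7]))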

    A Solver for Multiobjective Mixed-Integer Convex and Nonconvex Optimization

    This paper proposes a general framework for solving multiobjective nonconvex optimization problems, i.e., optimization problems in which multiple objective functions have to be optimized simultaneously. The nonconvexity might come from the objective or constraint functions, or from integrality conditions on some of the variables. In particular, multiobjective mixed-integer convex and nonconvex optimization problems are covered and form the motivation of our studies. The presented algorithm is based on a branch-and-bound method in the pre-image space, a technique which has already been applied successfully to continuous nonconvex multiobjective optimization. However, extending this method to the mixed-integer setting is not straightforward, in particular with regard to convergence results. More precisely, new branching rules and lower bounding procedures are needed to obtain an algorithm that is practically applicable and convergent for multiobjective mixed-integer optimization problems. Corresponding results are a main contribution of this paper. Moreover, to improve the performance of this new branch-and-bound method, we enhance it with two types of cuts in the image space based on ideas from multiobjective mixed-integer convex optimization. These combine continuous convex relaxations with adaptive cuts for the convex hull of the mixed-integer image set, derived from supporting hyperplanes to the relaxed sets. Based on the above ingredients, the paper provides a new multiobjective mixed-integer solver for convex problems with a stopping criterion purely in the image space. In addition, a solver for multiobjective mixed-integer nonconvex optimization is presented for the first time. We provide the results of numerical tests for the new algorithm and, where possible, compare it with existing procedures.
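    As a rough illustration of the general pre-image-space branch-and-bound pattern described above, the skeleton below maintains a list of boxes, discards a box when its objective lower bound is already dominated by an incumbent point, and otherwise bisects it along its longest edge, splitting at integer values for integer variables. The bounding and feasibility routines are placeholders supplied by the caller; the paper's branching rules, lower bounding procedures, and image-space cuts are substantially more refined than this sketch.

    import numpy as np

    def mo_branch_and_bound(lower_bound, feasible_point, l, u, int_idx,
                            eps=1e-3, max_iter=10_000):
        # lower_bound(lo, up)    -> vector lower bound of all objectives over the box
        # feasible_point(lo, up) -> objective vector of some feasible point in the box, or None
        # int_idx                -> indices of the integer variables
        def dominated(z, front):
            return any(np.all(p <= z) for p in front)

        front = []                                   # incumbent nondominated objective vectors
        work = [(np.array(l, float), np.array(u, float))]
        while work and max_iter > 0:
            max_iter -= 1
            lo, up = work.pop()
            if dominated(np.asarray(lower_bound(lo, up)), front):
                continue                             # box cannot contain improving points
            y = feasible_point(lo, up)
            if y is not None:
                y = np.asarray(y)
                if not dominated(y, front):          # keep only mutually nondominated incumbents
                    front = [p for p in front if not np.all(y <= p)] + [y]
            if np.max(up - lo) < eps:
                continue                             # box is thin enough, stop branching
            k = int(np.argmax(up - lo))              # bisect along the longest edge
            mid = 0.5 * (lo[k] + up[k])
            left_up, right_lo = up.copy(), lo.copy()
            if k in int_idx:                         # split integer variables at integer values
                left_up[k], right_lo[k] = np.floor(mid), np.floor(mid) + 1
            else:
                left_up[k], right_lo[k] = mid, mid
            if np.all(lo <= left_up):
                work.append((lo, left_up))
            if np.all(right_lo <= up):
                work.append((right_lo, up))
        return front, work                           # incumbents and unexplored boxes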